1.
J Clin Oncol ; 41(14): 2493-2502, 2023 05 10.
Article in English | MEDLINE | ID: mdl-36809050

ABSTRACT

PURPOSE: Metastatic papillary renal cancer (PRC) has poor outcomes, and new treatments are required. There is a strong rationale for investigating mesenchymal epithelial transition receptor (MET) and programmed cell death ligand-1 (PD-L1) inhibition in this disease. In this study, the combination of savolitinib (a MET inhibitor) and durvalumab (a PD-L1 inhibitor) is investigated. METHODS: This single-arm phase II trial explored durvalumab (1,500 mg once every four weeks) and savolitinib (600 mg once daily; ClinicalTrials.gov identifier: NCT02819596). Treatment-naïve or previously treated patients with metastatic PRC were included. A confirmed response rate (cRR) of > 50% was the primary end point. Progression-free survival, tolerability, and overall survival were secondary end points. Biomarkers were explored from archived tissue (MET-driven status). RESULTS: Forty-one patients with advanced PRC were enrolled in this study and received at least one dose of study treatment. The majority of patients had an intermediate Heng risk score (n = 26 [63%]). The cRR was 29% (n = 12; 95% CI, 16 to 46); the trial therefore missed the primary end point. The cRR increased to 53% (95% CI, 28 to 77) in MET-driven patients (n/N = 9/17) and was 33% (95% CI, 17 to 54) in PD-L1-positive tumors (n/N = 9/27). The median progression-free survival was 4.9 months (95% CI, 2.5 to 10.0) in the treated population and 12.0 months (95% CI, 2.9 to 19.4) in MET-driven patients. The median overall survival was 14.1 months (95% CI, 7.3 to 30.7) in the treated population and 27.4 months (95% CI, 9.3 to not reached [NR]) in MET-driven patients. Grade 3 and above treatment-related adverse events occurred in 17 (41%) patients. There was one grade 5 treatment-related adverse event (cerebral infarction). CONCLUSION: The combination of savolitinib and durvalumab was tolerable and associated with high cRRs in the exploratory MET-driven subset.


Subjects
B7-H1 Antigen , Kidney Neoplasms , Humans , Kidney Neoplasms/drug therapy , Antineoplastic Combined Chemotherapy Protocols/adverse effects
2.
Front Neurosci ; 15: 603433, 2021.
Article in English | MEDLINE | ID: mdl-34776834

ABSTRACT

Spiking neural networks (SNNs), with their inherent capability to learn sparse spike-based input representations over time, offer a promising solution for enabling the next generation of intelligent autonomous systems. Nevertheless, end-to-end training of deep SNNs is both compute- and memory-intensive because of the need to backpropagate error gradients through time. We propose BlocTrain, a scalable and complexity-aware incremental algorithm for memory-efficient training of deep SNNs. We divide a deep SNN into blocks, where each block consists of a few convolutional layers followed by a classifier. We train the blocks sequentially using local errors from the classifier. Once a given block is trained, our algorithm dynamically determines easy vs. hard classes using the class-wise accuracy, and trains the deeper block only on the hard class inputs. In addition, we incorporate a hard class detector (HCD) per block that is used during inference to exit early for the easy class inputs and activate the deeper blocks only for the hard class inputs. Using BlocTrain, we trained a ResNet-9 SNN divided into three blocks on CIFAR-10 and obtained 86.4% accuracy, achieved with up to 2.95× lower memory requirement during the course of training and 1.89× compute efficiency per inference (due to the early exit strategy) at a 1.45× memory overhead (primarily due to classifier weights) compared to the end-to-end network. We also trained a ResNet-11, divided into four blocks, on CIFAR-100 and obtained 58.21% accuracy, one of the first reported accuracies for an SNN trained entirely with spike-based backpropagation on CIFAR-100.
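The block-wise early exit described above can be sketched in a few lines; the toy block classifiers, the confidence threshold, and all function names below are illustrative assumptions standing in for the trained blocks and the hard class detector, not the paper's implementation:

```python
# Illustrative sketch of BlocTrain-style early exit at inference time.

def block1_classify(x):
    # shallow block: confident on "easy" inputs (here, x < 0.5)
    return ("easy", 0.95) if x < 0.5 else ("hard", 0.55)

def block2_classify(x):
    # deeper block, activated only for inputs flagged as hard
    return ("hard", 0.90)

def infer(x, threshold=0.8):
    label, confidence = block1_classify(x)
    if confidence >= threshold:      # HCD role: exit early when confident
        return label, "block1"
    label, _ = block2_classify(x)
    return label, "block2"
```

Easy inputs thus pay only for the first block, which is where the per-inference compute saving comes from.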

3.
Nat Commun ; 11(1): 2245, 2020 05 07.
Article in English | MEDLINE | ID: mdl-32382036

ABSTRACT

Trees are used by animals, humans and machines to classify information and make decisions. The natural tree structures displayed by synapses of the brain involve potentiation and depression, are capable of branching, and are essential for survival and learning. Demonstrating such features in synthetic matter is challenging due to the need to host a complex energy landscape capable of learning, memory and electrical interrogation. We report the experimental realization of tree-like conductance states at room temperature in strongly correlated perovskite nickelates by modulating the proton distribution under high-speed electric pulses. This demonstration represents a physical realization of ultrametric trees, a concept from number theory applied to the study of spin glasses in physics that inspired early neural network theory almost forty years ago. We apply the tree-like memory features in spiking neural networks to demonstrate high-fidelity object recognition, which in the future can open new directions for neuromorphic computing and artificial intelligence.
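The "ultrametric" structure mentioned above refers to distances satisfying the strong triangle inequality, which distances between leaves of a tree obey by construction. A minimal check, using an invented three-leaf tree as the example:

```python
def is_ultrametric(d, points):
    # An ultrametric satisfies the strong triangle inequality:
    # d(x, z) <= max(d(x, y), d(y, z)) for every triple of points.
    return all(
        d[x][z] <= max(d[x][y], d[y][z])
        for x in points for y in points for z in points
    )

# Pairwise distances induced by a binary tree: leaves a and b share a
# parent, c branches off higher up, so d(a,b)=1 while d(a,c)=d(b,c)=2.
tree_d = {
    "a": {"a": 0, "b": 1, "c": 2},
    "b": {"a": 1, "b": 0, "c": 2},
    "c": {"a": 2, "b": 2, "c": 0},
}
```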

5.
Front Neurosci ; 14: 119, 2020.
Article in English | MEDLINE | ID: mdl-32180697

ABSTRACT

Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, it is still a difficult problem to directly train deep SNNs using input spike events due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of LIF neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning on deep networks (VGG and Residual architectures) by achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets compared to other SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference operation in the spiking domain.
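As a rough sketch of the idea (not the paper's exact leak-aware derivative), spike-based backpropagation replaces the non-differentiable spike function with a smooth pseudo-derivative; here a generic triangular surrogate around the firing threshold, with illustrative parameter values:

```python
def lif_step(v, i_in, leak=0.95, v_th=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    Returns the new membrane potential and the output spike (0 or 1);
    the potential is reset to zero after a spike.
    """
    v = leak * v + i_in
    if v >= v_th:
        return 0.0, 1.0
    return v, 0.0

def surrogate_grad(v, v_th=1.0, width=0.5):
    # pseudo-derivative of the non-differentiable spike function:
    # nonzero only in a window around the firing threshold, so gradients
    # flow through neurons that were close to firing
    return max(0.0, 1.0 - abs(v - v_th) / width) / width
```

During the backward pass, `surrogate_grad(v)` stands in wherever the true (ill-defined) derivative of the spike function would appear.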

6.
Philos Trans A Math Phys Eng Sci ; 378(2164): 20190157, 2020 Feb 07.
Article in English | MEDLINE | ID: mdl-31865881

ABSTRACT

Spiking neural networks (SNNs) offer a bio-plausible and potentially power-efficient alternative to conventional deep learning. Although there has been progress towards implementing SNN functionalities in custom CMOS-based hardware using beyond Von Neumann architectures, the power-efficiency of the human brain has remained elusive. This has necessitated investigations of novel material systems which can efficiently mimic the functional units of SNNs, such as neurons and synapses. In this paper, we present a magnetoelectric-magnetic tunnel junction (ME-MTJ) device as a synapse. We arrange these synapses in a crossbar fashion and perform in situ unsupervised learning. We leverage the capacitive nature of the write-ports in ME-MTJs, wherein by applying appropriately shaped voltage pulses across the write-port, the ME-MTJ can be switched in a probabilistic manner. We further exploit the sigmoidal switching characteristics of the ME-MTJ to tune the synapses to follow the well-known spike timing-dependent plasticity (STDP) rule in a stochastic fashion. Finally, we use the stochastic STDP rule in ME-MTJ synapses to simulate a two-layered SNN to perform image classification tasks on a handwritten digit dataset. Thus, the capacitive write-port and the decoupled nature of the read-write path of ME-MTJs allow us to construct a transistor-less crossbar, suitable for energy-efficient implementation of in situ learning in SNNs. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
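The probabilistic write scheme can be sketched as follows; the sigmoid parameters and the STDP time constant are assumed values for illustration, not device measurements:

```python
import math
import random

def switch_probability(pulse_v, v0=0.5, k=10.0):
    # sigmoidal switching characteristic of the ME-MTJ write-port
    return 1.0 / (1.0 + math.exp(-k * (pulse_v - v0)))

def stdp_pulse_amplitude(dt, a=1.0, tau=20.0):
    # shape the write pulse so the switching probability follows an STDP
    # window: large for small |dt| (tightly correlated pre/post spikes),
    # decaying as the spikes move apart in time
    return a * math.exp(-abs(dt) / tau)

def stochastic_stdp_switch(dt, rng):
    """Return +1 (potentiating switch), -1 (depressing switch) or 0."""
    p = switch_probability(stdp_pulse_amplitude(dt))
    if rng.random() >= p:
        return 0                      # device did not switch this event
    return 1 if dt > 0 else -1
```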

7.
Front Neurosci ; 13: 883, 2019.
Article in English | MEDLINE | ID: mdl-31507361

ABSTRACT

We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. The random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
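A minimal sketch of this architecture, assuming a rate-based stand-in for the spiking reservoir and a simple TD update for the readout (the sizes, clipped activation, and learning rates are illustrative choices, not the paper's settings):

```python
import random

random.seed(0)
N_IN, N_RES, N_ACT = 4, 16, 2

# fixed random input and recurrent weights: never trained
W_in  = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_RES)]
W_rec = [[random.uniform(-0.1, 0.1) for _ in range(N_RES)] for _ in range(N_RES)]
# readout weights: the only trainable parameters
W_out = [[0.0] * N_RES for _ in range(N_ACT)]

def step_reservoir(state, x):
    # fixed random recurrent dynamics (rate approximation of the spiking
    # reservoir); the state carries fading memory of past inputs
    new = []
    for i in range(N_RES):
        drive = sum(W_in[i][j] * x[j] for j in range(N_IN)) \
              + sum(W_rec[i][j] * state[j] for j in range(N_RES))
        new.append(max(0.0, min(1.0, drive)))   # clipped activation
    return new

def q_values(state):
    # linear readout of action values from the reservoir state
    return [sum(W_out[a][i] * state[i] for i in range(N_RES)) for a in range(N_ACT)]

def td_update(state, action, reward, next_state, alpha=0.05, gamma=0.9):
    # Q-learning step applied only to the readout weights
    target = reward + gamma * max(q_values(next_state))
    err = target - q_values(state)[action]
    for i in range(N_RES):
        W_out[action][i] += alpha * err * state[i]
    return err
```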

8.
J Food Biochem ; 43(7): e12859, 2019 07.
Article in English | MEDLINE | ID: mdl-31353706

ABSTRACT

The available cultivable plant-based food resources in developing tropical countries are inadequate to supply proteins for both humans and animals. This limitation of available plant food sources is due to shrinking agricultural land, rapid urbanization, climate change, and tough competition between the food and feed industries for existing food and feed crops. However, the cheapest food materials are those derived from plant sources which, although they occur in abundance in nature, are still underutilized. At this juncture, the identification, evaluation, and introduction of underexploited millet crops, including crops of tribal utility, which are generally rich in protein, is one of the viable long-term solutions for a sustainable supply of food and feed materials. In view of the above, the present review endeavors to highlight the nutritional and functional potential of underexploited millet crops. PRACTICAL APPLICATIONS: Millets are an important food crop at the global level with a significant economic impact on developing countries. Millets have advantageous characteristics as they are drought- and pest-resistant grains. Millets are considered high-energy, nourishing foods that help in addressing malnutrition. Millet-based foods are considered potential prebiotics and probiotics with prospective health benefits. Grains of these millet species are widely consumed as a source of traditional medicines and important foods to preserve health.


Subjects
Crops, Agricultural , Food Supply , Millets , Nutritive Value , Animal Feed , Anti-Infective Agents/analysis , Anti-Inflammatory Agents/analysis , Antioxidants/analysis , Developing Countries/economics , Dietary Fiber/analysis , Dietary Fiber/pharmacology , Edible Grain , Flavonoids/analysis , Flavonoids/pharmacology , Humans , Millets/anatomy & histology , Millets/chemistry , Millets/genetics , Phenols/analysis , Phenols/pharmacology , Plant Extracts/pharmacology , Poverty
9.
Front Neurosci ; 13: 504, 2019.
Article in English | MEDLINE | ID: mdl-31191219

ABSTRACT

The liquid state machine (LSM), a bio-inspired computing model consisting of an input sparsely connected to a randomly interlinked reservoir (or liquid) of spiking neurons followed by a readout layer, finds utility in a range of applications varying from robot control and sequence generation to action, speech, and image recognition. LSMs stand out among other Recurrent Neural Network (RNN) architectures due to their simple structure and lower training complexity. A plethora of recent efforts has focused on mimicking certain characteristics of biological systems to enhance the performance of modern artificial neural networks. It has been shown that biological neurons are more likely to be connected to other neurons in close proximity, and tend to be disconnected as the neurons are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble of liquids approach, for LSMs. We analyze how the segmentation of a single large liquid to create an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to better accuracy in speech and image recognition tasks when compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lowered number of connections and the freedom to parallelize the liquid evaluation process.
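One concrete source of the reported memory savings is the recurrent connection count: splitting one liquid of n neurons into k disjoint liquids of n/k neurons cuts the expected number of recurrent synapses by a factor of k. A toy calculation, assuming a uniform connection probability p (the value 0.2 is an invented default):

```python
def num_recurrent_connections(n_neurons, p=0.2):
    # expected recurrent synapses in one randomly connected liquid:
    # each ordered neuron pair is connected with probability p
    return p * n_neurons * n_neurons

def ensemble_connections(n_neurons, k, p=0.2):
    # k disjoint liquids of n/k neurons each, no cross-liquid synapses
    m = n_neurons // k
    return k * num_recurrent_connections(m, p)
```

Because the single-liquid count grows quadratically in n while the ensemble grows quadratically only in n/k, the ratio between the two is exactly k.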

10.
Front Neurosci ; 13: 189, 2019.
Article in English | MEDLINE | ID: mdl-30941003

ABSTRACT

In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
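The flavor of such a probabilistic binary-weight update can be sketched as below; the exponential STDP window and its time constant are illustrative assumptions, not the exact HB-STDP rule from the paper:

```python
import math
import random

def hb_stdp_flip(w, dt, rng, tau=20.0):
    """Probabilistically flip a binary weight w in {0, 1}.

    Hebbian: a causal pre-before-post spike pair (dt > 0) can set w to 1;
    anti-Hebbian: an acausal pair (dt <= 0) can reset w to 0. The switch
    happens with probability decaying in the spike-time gap |dt|.
    """
    p = math.exp(-abs(dt) / tau)
    if rng.random() >= p:
        return w                # no switch on this spike event
    return 1 if dt > 0 else 0
```

Averaged over many spike events, the binary kernel thus behaves like a stochastic approximation of a real-valued STDP-trained kernel while storing only one bit per synapse.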

11.
Front Neurosci ; 12: 524, 2018.
Article in English | MEDLINE | ID: mdl-30190670

ABSTRACT

In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly from the liquid neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs) that have fixed synaptic connections between the input and liquid, followed by a readout layer (trained in a supervised manner) to extract the liquid states and infer the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by using fixed input-to-liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid for improving the accuracy introduces scalability challenges and training inefficiencies. We therefore propose SpiLinC, which is composed of an ensemble of multiple liquids operating in parallel. We use a "divide and learn" strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns, causing the neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers competitive classification accuracy compared to the Liquid-SNN with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and handwritten digits from the MNIST dataset.
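The "divide and learn" inference step can be sketched as follows, assuming each trained liquid is summarized by a function returning per-class spike counts for its input segment (all names and the equal-segment split are illustrative):

```python
def spilinc_predict(x, liquids, n_classes=10):
    """Combine the spiking activity of an ensemble of liquids (sketch).

    Each liquid sees one contiguous segment of the input pattern and
    returns per-class spike counts from its self-organized neurons;
    the ensemble sums the counts and picks the most active class.
    """
    seg = len(x) // len(liquids)
    totals = [0] * n_classes
    for k, liquid in enumerate(liquids):
        counts = liquid(x[k * seg:(k + 1) * seg])
        totals = [t + c for t, c in zip(totals, counts)]
    return max(range(n_classes), key=totals.__getitem__)
```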

12.
Front Neurosci ; 12: 435, 2018.
Article in English | MEDLINE | ID: mdl-30123103

ABSTRACT

Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more complex functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases: first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning, followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5×) training time and better generalization compared with purely gradient-based training without pre-training.
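The two-phase schedule can be sketched on a single weight vector; the exponential STDP window and the plain gradient step below are generic illustrative forms, not the paper's exact updates:

```python
import math

def stdp_pretrain(weights, spike_pairs, lr=0.01, tau=20.0):
    # Phase 1: unsupervised STDP initialization. spike_pairs holds
    # (weight_index, dt) tuples with dt = t_post - t_pre.
    for i, dt in spike_pairs:
        dw = lr * math.exp(-abs(dt) / tau)
        weights[i] += dw if dt > 0 else -dw   # potentiate causal pairs only
    return weights

def finetune(weights, grads, lr=0.1):
    # Phase 2: spike-based supervised gradient descent applied to the
    # STDP-initialized weights
    return [w - lr * g for w, g in zip(weights, grads)]
```

The point of the scheme is that phase 2 starts from structured, data-driven weights rather than a random initialization, which is where the reported robustness and speed-up come from.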

13.
Sci Rep ; 6: 29545, 2016 07 13.
Article in English | MEDLINE | ID: mdl-27405788

ABSTRACT

Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term/short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
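The long-term/short-term idea can be sketched with two binary elements that switch with different probabilities; the class name and the probability values below are invented for illustration, not the device parameters:

```python
import random

class LTSTSynapse:
    """Significance-driven long-term/short-term stochastic binary synapse (sketch).

    Two binary elements: the short-term (ST) element switches easily and
    tracks recent activity; the long-term (LT) element can switch only while
    the ST element is set, so only persistent (significant) activity is
    consolidated into long-term memory.
    """
    def __init__(self, rng, p_st=0.5, p_lt=0.05):
        self.rng, self.p_st, self.p_lt = rng, p_st, p_lt
        self.st = 0
        self.lt = 0

    def potentiate(self):
        # one correlated spike event: attempt both stochastic switches
        if self.rng.random() < self.p_st:
            self.st = 1
        if self.st == 1 and self.rng.random() < self.p_lt:
            self.lt = 1

    def weight(self):
        # LT element contributes more to the effective conductance
        return self.st + 2 * self.lt
```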
